Lessons on maintaining your humanity in the world of AI technology
Illustration by Koto Feja for iStock/Getty Images
AI is not human. But it does a good job of acting like it.
It is capable of replicating how we speak, how we write and even how we solve problems.
So it’s easy to see why many consider it a threat, or at least a challenge, to our humanity.
That challenge is at the heart of a new book titled “AI and the Art of Being Human,” written by AI with the help of Jeff Abbott and Andrew Maynard. The book is described as a practical, optimistic and human-centric guide to navigating the age of artificial intelligence.
“Human qualities that will become more important as AI advances are qualities like curiosity, our capacity for wonder and awe, our ability to create value through relationships and … our capacity to love and be loved,” said Maynard, a scientist, writer and professor at Arizona State University’s School for the Future of Innovation in Society.
Here, Maynard and Abbott, a graduate of Thunderbird School of Global Management at ASU and the founding partner of Blitzscaling Ventures, a venture capital firm investing in startups, discuss the ways that AI can challenge our individuality and how we can hold on to what makes us uniquely human.
Note: Answers have been edited for length and/or clarity.
Question: What was the inspiration behind “AI and the Art of Being Human?”
Maynard: For me, it was the growing realization that, for the first time, we have a technology that is capable of replicating what we think of as uniquely defining who we are, and that is forcing us to ask what makes us us in a world of AI. These are questions that my students and others are asking with increasing frequency: How do I hold onto what makes me who I am, and thrive, when everything around us is changing so fast?
Q: How does AI impede or infringe upon the ability to be human?
Abbott: AI has the potential to further reduce human interaction and, with it, the opportunity to exercise compassion. Compassion broadly defined means an action-oriented concern for others’ well-being, and it is much more easily activated where direct human contact is involved.
When building AI, we must widen our circle of concern to include those who are not present, represented or offered a voice in the process. Those who are adversely affected by our actions in building or using AI tools should be taken into account. And in the same way that someone causing environmental harm can now attempt to offset those impacts, those causing unintended consequences when building AI should accept their share of responsibility and contribute to some form of mitigation, whether directly or indirectly.
Q: The idea of AI being a mirror is mentioned in the book. What does that mean and why is that a concern?
Maynard: Because artificial intelligence is increasingly capable of emulating the things that we think of as making us uniquely human — the way we speak, our thinking and reasoning, our ability to empathize and form relationships, and to solve problems and innovate — it’s becoming a metaphorical mirror that reflects not simply what we look like, but who we believe we are. Of course, AI isn’t aware or “human” as such. But it does an amazing job of feeling human. And because of this, it has the potential to reveal things about ourselves that we didn’t know. It also has the capacity to distort what we see, sometimes without us realizing it.
Q: As an antidote to AI’s threat to humanity, the book offers 21 tools that provide a practical business guide for thriving in an age of this powerful technology. Can you explain them?
Abbott: I’m a big believer in the power of tools based on my background in corporate strategy and entrepreneurship education … and I imagined a book that was at once deeply thoughtful and values-based, while also immensely practical, something like equal parts “The 7 Habits of Highly Effective People,” “The Business Model Canvas” and daily guided meditation.
The intent map is one of the tools that illustrates this with four quadrants. It’s a thinking tool that makes values visible and choices conscious before the momentum of AI and the actions of others make choices for you. For example, the “values” quadrant addresses the question of what we refuse to compromise when using AI, and ... the “guardrails” quadrant asks where do we draw hard lines around what we will and will not compromise on.
The power here lies not in the quadrants, but in how someone uses the relationships between them to make decisions around AI in their life.
Q: What is the danger in over-relying on AI for not just our work, but even in other areas of our lives?
Maynard: We talk a lot about agentic AI at the moment — AI that has the “agency” to make decisions and complete tasks on its own, whether that’s managing your calendar and email inbox ... or making strategic organizational decisions. From the perspective of increasing efficiency and productivity, this sounds great. At the same time, we risk losing our own human agency as we give it away to AI — especially if we do it without thinking about the consequences. In the book, we develop and apply four postures that are designed to help avoid this: curiosity, clarity, intentionality and care.
Q: What human qualities do you think will become more important as AI advances?
Abbott: Self-reliance in the Emersonian sense, because Emerson’s self-reliance wasn’t merely about independence in the mundane sense, e.g. doing your own chores. It was a spiritual and intellectual manifesto about maintaining sovereignty of mind in the face of conformity, convenience and delegation to systems of thought outside oneself. In the age of AI, that idea isn’t nostalgic; it’s necessary and it’s urgent.
Q: What role did AI play in writing this book?
Maynard: Rather a lot! We agreed early on in the process that, given the urgency with which the book was needed, it made sense to use AI to accelerate the writing process. But we also realized that we needed to walk the walk and use the tools we were writing about. And so we developed a quite complex and sophisticated approach to working with AI to create the first draft of the book.
We talk a little about this process in the book, but the end result is a deeply human initiative that reflects what is possible while working with curiosity, clarity, intention and care with AI.
What I still find amazing is that, while we guided our AI “ghost writer” very intentionally, the stories in the book and the tools they help develop are all products of AI, seeded by us and subsequently refined by us. They are a testament to what is possible through working creatively and iteratively with AI.
Q: What do you hope people will come away with after reading the book and will its contents be used by ASU students?
Maynard: I hope people will approach the book as a practical guide. Something that they bookmark and come back to and apply in their everyday lives. More importantly, I hope people come away realizing that AI isn’t something that simply happens to them but is something that can help them learn to thrive ... on their own terms and in their own way.
The hope, of course, is that the ideas and tools here are part of every student’s journey at ASU as we equip them to thrive in an AI future. The book is ... written in a way that lends itself to being integrated into curricula. In the AI world we’re in the process of building, it’s the students who understand how to thrive without losing sight of who they are who will be the catalysts for change. And achieving this at scale? Isn’t this part of what ASU is all about?